452 research outputs found

    Methods of regularization for computing orbits in celestial mechanics

    Numerical and analytical methods for orbit computation in celestial mechanics, during and beyond collision, through the introduction of regularized coordinates

    Human Like Adaptation of Force and Impedance in Stable and Unstable Tasks

    This paper presents a novel human-like learning controller to interact with unknown environments. Strictly derived from the minimization of instability, motion error, and effort, the controller compensates for the disturbance in the environment in interaction tasks by adapting feedforward force and impedance. In contrast with conventional learning controllers, the new controller can deal with unstable situations that are typical of tool use, and gradually acquires a desired stability margin. Simulations show that this controller is a good model of human motor adaptation. Robotic implementations further demonstrate its capability to optimally adapt interaction with dynamic environments and humans in joint-torque-controlled robots and variable impedance actuators, without requiring interaction force sensing
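
    A minimal numerical sketch of this kind of adaptation (an illustrative first-order law, not the paper's actual controller; all gains and the constant disturbance are invented values): feedforward force and stiffness grow with tracking error and decay through a forgetting term, so effort stays small once the disturbance is compensated.

```python
# Hypothetical trial-by-trial adaptation of feedforward force and stiffness.
# Gains and the disturbance level are made up for illustration.
alpha, beta, gamma = 0.8, 0.5, 0.1  # adaptation and forgetting gains (hypothetical)

def adapt(u_ff, K, error):
    u_ff = u_ff + alpha * error - gamma * u_ff       # feedforward force update
    K = max(0.0, K + beta * abs(error) - gamma * K)  # stiffness grows with error, decays otherwise
    return u_ff, K

# Simulate repeated trials against a constant environmental disturbance
u_ff, K, disturbance = 0.0, 0.0, 5.0
for trial in range(50):
    error = disturbance - u_ff   # residual error after feedforward compensation
    u_ff, K = adapt(u_ff, K, error)

print(round(u_ff, 2))  # settles near (but below) the disturbance level
```

    The forgetting term keeps both force and stiffness from growing without bound, which is the sense in which effort stays minimal in this toy version.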

    Motion plan changes predictably in dyadic reaching

    Parents can effortlessly assist their child to walk, but the mechanism behind such physical coordination is still unknown. Studies have suggested that physical coordination is achieved by interacting humans who update their movement or motion plan in response to the partner's behaviour. Here, we tested rigidly coupled pairs in a joint reaching task to observe such changes in the partners' motion plans. However, the joint reaching movements were surprisingly consistent across different trials. A computational model that we developed demonstrated that the two partners each had a distinct motion plan, which did not change with time. These results suggest that rigidly coupled pairs accomplish joint reaching movements by relying on a pre-programmed motion plan that is independent of the partner's behaviour

    Control of a supernumerary robotic hand by foot: an experimental study in virtual reality.

    In the operating theatre, the surgical team could benefit greatly from a supplementary robotic hand under the surgeon's full control. The surgeon would thus become more autonomous; this may reduce communication errors with the assistants, and the robotic hand could take over difficult tasks such as holding tools without tremor. In this paper, we therefore examine the possibility of controlling a third robotic hand with the movements of one foot. Three experiments in virtual reality were designed to assess the feasibility of this control strategy, the subjects' learning curve in different tasks, and the coordination of foot movements with the two natural hands. Results show that the limbs are moved simultaneously, in parallel rather than serially. Participants' performance improved within a few minutes of practice, without any specific difficulty in completing the tasks. Subjective assessment by the subjects indicated that controlling a third hand by foot was easy and required only negligible physical and mental effort. The sense of ownership was reported to improve over the experiments. The mental burden was not directly related to the level of motion required by a task, but depended on the type of activity and on practice. The most difficult task was moving the two hands and the foot in opposite directions. These results suggest that a combination of practice and appropriate tasks can enhance the learning process for controlling a robotic hand by foot

    Anticipatory detection of turning in humans for intuitive control of robotic mobility assistance

    Many wearable lower-limb robots for walking assistance have been developed in recent years. However, it remains unclear how they can be commanded in an intuitive and efficient way by their user. In particular, providing robotic assistance in turning to neurologically impaired individuals remains a significant challenge. The control should be safe for the users and their environment, yet yield sufficient performance and enable natural human-machine interaction. Here, we propose using the anticipatory behaviour of the head and trunk to detect the intention to turn in a natural, non-intrusive way, and to use it for triggering the turning movement of a robot for walking assistance. We therefore study head and trunk orientation during locomotion of healthy adults, and investigate upper-body anticipatory behaviour during turning. The collected walking and turning kinematics data are clustered using the k-means algorithm, and cross-validation tests with the k-nearest neighbours method are used to evaluate the performance of turning detection during locomotion. Tests with seven subjects exhibited accurate turning detection. The head anticipated turning by more than 400–500 ms on average across all subjects. Overall, the proposed method detected turning 300 ms after its initiation and 1230 ms before the turning movement was completed. Using head anticipatory behaviour enabled turning to be detected about 100 ms faster than detection based only on pelvis orientation measurements. Finally, it was demonstrated that the proposed turning detection can improve the quality of human–robot interaction by improving control accuracy and transparency
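
    The pipeline named above (k-means clustering of kinematics plus k-nearest-neighbour classification) can be sketched as follows. The two-dimensional yaw-rate features, cluster separations, and all numbers are invented for illustration and are not taken from the study.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: alternate nearest-centre assignment and centre update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def knn_predict(X_train, y_train, x, k=5):
    """Label a new sample by majority vote among its k nearest training samples."""
    d = ((X_train - x) ** 2).sum(-1)
    nearest = y_train[np.argsort(d)[:k]]
    return int(np.bincount(nearest).argmax())

# Synthetic features: [head yaw rate, pelvis yaw rate] in deg/s (hypothetical).
# During turning the head leads, so its yaw rate is larger than the pelvis's.
rng = np.random.default_rng(1)
straight = rng.normal([0.0, 0.0], [3.0, 3.0], size=(100, 2))
turning = rng.normal([40.0, 20.0], [5.0, 5.0], size=(100, 2))
X = np.vstack([straight, turning])
y = np.array([0] * 100 + [1] * 100)

centers, _ = kmeans(X, 2)           # unsupervised structure, as in the paper's pipeline
print(knn_predict(X, y, np.array([35.0, 15.0])))  # classified as turning (1)
```

    In the study the features come from measured head and pelvis orientations and detection is scored with cross-validation; here a single held-out point stands in for that step.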

    In a demanding task, three-handed manipulation is preferred to two-handed manipulation.

    Equipped with a third hand under their direct control, surgeons may be able to perform certain surgical interventions alone; this would reduce the need for a human assistant and the related coordination difficulties. However, does human performance improve with three hands compared to two? To evaluate this possibility, we carried out a behavioural study of the performance of naive adults catching objects with three virtual hands controlled by their two hands and right foot. The subjects could successfully control the virtual hands within a few trials. With this control strategy, the workspace of the hands was inversely correlated with the task velocity. The comparison of performance between three- and two-handed control revealed no significant difference in success at catching falling objects or in average effort during the tasks. Subjects preferred the three-handed control strategy and found it easier, with lower physical and mental burden. Although the coordination of the foot with the natural hands increased trial after trial, about two minutes of practice was not sufficient to develop a sense of ownership towards the third arm

    Sensory Integration of Apparent Motion Speed and Vibration Magnitude

    Tactile apparent motion can display directional information in an intuitive way. It can, for example, be used to give directions to visually impaired individuals, or for waypoint navigation while cycling on busy streets, when vision or audition should not be loaded further. However, although humans can detect very short tactile patterns, discriminating between similar motion speeds has been shown to be difficult. Here we develop and investigate a method where the speed of tactile apparent motion around the user's wrist is coupled with vibration magnitude. This redundant coupling is used to produce tactile patterns ranging from slow & weak to fast & strong. We compared the just noticeable difference (JND) of the coupled and the individual variables. The results show that the perception of the coupled variable can be characterised by a JND smaller than the JNDs of the individual variables. This allowed us to create short tactile patterns (tactons) for the display of direction and speed, which can be distinguished significantly better than tactons based on motion alone. Additionally, most subjects were also able to identify the coupled-variable tactons better than the magnitude-based tactons
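
    One standard way to model why a redundantly coupled variable can yield a smaller JND is maximum-likelihood cue combination, where discrimination thresholds (expressed as standard deviations) combine through inverse variances. This is a generic sensory-integration model, not necessarily the analysis used in the study, and the threshold values below are invented.

```python
# Maximum-likelihood cue combination: the combined estimate's variance is
# the inverse of the sum of the cues' inverse variances, so the combined
# threshold is below either individual threshold. Values are illustrative.

def combined_sigma(sigma_speed, sigma_magnitude):
    return (sigma_speed ** -2 + sigma_magnitude ** -2) ** -0.5

s = combined_sigma(2.0, 3.0)  # hypothetical speed and magnitude JNDs
print(round(s, 2))            # smaller than both 2.0 and 3.0
```

    This predicts the qualitative result reported above: coupling two redundant cues sharpens discrimination relative to either cue alone.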

    Haptic Bimanual System for Teleoperation of Time-Delayed Tasks

    This paper presents a novel teleoperation system, which has been designed to address challenges in the remote control of spaceborne bimanual robotic tasks. The primary interest for designing this system is to assess and increase the efficacy of users performing bimanual tasks, while ensuring the safety of the system and minimising the user's mental load. This system consists of two seven-axis robots that are remotely controlled through two haptic control interfaces. The mental load of the user is monitored using a head-mounted interface, which collects eye gaze data and provides components for the holographic user interface. The development of this system enables the safe execution of tasks remotely, which is a critical building block for developing and deploying future space missions as well as other high-risk tasks

    Computational neurorehabilitation: modeling plasticity and learning to predict recovery

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling – regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity
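
    As a toy example of the mechanistic models discussed here, a first-order recovery curve is a common starting point: motor function improves with each therapy session in proportion to remaining capacity. The score scale, ceiling, and per-session learning rate below are hypothetical, not values from the paper.

```python
# Saturating recovery model: each session closes a fixed fraction of the
# gap between the current score and a ceiling. All values are hypothetical.
ceiling = 66.0        # e.g. maximum of an upper-extremity impairment scale
learning_rate = 0.05  # fraction of residual capacity gained per session
score = 20.0          # baseline score

trajectory = [score]
for session in range(30):
    score += learning_rate * (ceiling - score)   # diminishing returns per session
    trajectory.append(score)

print(round(trajectory[-1], 1))  # approaches but does not reach the ceiling
```

    Mechanistic models of the kind reviewed in this article replace the fixed learning rate with terms driven by plasticity, therapy content, and daily activity.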

    Interaction with a reactive partner improves learning in contrast to passive guidance

